Youden's J Statistic

Youden's J statistic (also called Youden's index) is a single statistic that captures the performance of a dichotomous diagnostic test. Informedness is its generalization to the multiclass case and estimates the probability of an informed decision.


Definition

Youden's ''J'' statistic is

: J = \text{sensitivity} + \text{specificity} - 1

with the two right-hand quantities being sensitivity and specificity. Thus the expanded formula is:

: J = \frac{\text{true positives}}{\text{true positives} + \text{false negatives}} + \frac{\text{true negatives}}{\text{true negatives} + \text{false positives}} - 1

The index was suggested by W. J. Youden in 1950 as a way of summarising the performance of a diagnostic test; however, the formula was published earlier in ''Science'' by C. S. Peirce in 1884. Its value ranges from -1 through 1 (inclusive). It is zero when a diagnostic test gives the same proportion of positive results for groups with and without the disease, i.e. the test is useless, and 1 when there are no false positives or false negatives, i.e. the test is perfect. The index gives equal weight to false positive and false negative values, so all tests with the same value of the index give the same proportion of total misclassified results. While it is possible to obtain a value of less than zero from this equation, e.g. when classification yields only false positives and false negatives, a value below zero simply indicates that the positive and negative labels have been switched; after correcting the labels the result will again lie in the 0 through 1 range.
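A minimal sketch in Python of the calculation from a 2×2 confusion matrix (the counts used here are purely illustrative):

<syntaxhighlight lang="python">
def youdens_j(tp, fn, tn, fp):
    """Youden's J = sensitivity + specificity - 1 for a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity + specificity - 1

# Example: 80 of 100 diseased and 90 of 100 healthy subjects are classified correctly.
print(youdens_j(tp=80, fn=20, tn=90, fp=10))   # 0.8 + 0.9 - 1 = 0.7 (up to rounding)
</syntaxhighlight>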
Youden's index is often used in conjunction with receiver operating characteristic (ROC) analysis. The index is defined for all points of an ROC curve, and the maximum value of the index may be used as a criterion for selecting the optimum cut-off point when a diagnostic test gives a numeric rather than a dichotomous result. The index is represented graphically as the height above the chance line, and it is also equivalent to the area under the curve subtended by a single operating point.
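As a sketch of this use, the following Python snippet selects the cut-off that maximises ''J'' along an ROC curve; it assumes scikit-learn is available and that y_true and y_score hold illustrative labels and numeric test results:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import roc_curve

# Illustrative data: y_true are the binary disease labels,
# y_score the numeric output of the diagnostic test.
y_true  = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.2, 0.3, 0.35, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                  # J at each threshold: sensitivity + specificity - 1
best = np.argmax(j)
print("optimal cut-off:", thresholds[best], "with J =", j[best])
</syntaxhighlight>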
Youden's index is also known as deltaP' and generalizes from the dichotomous to the multiclass case as informedness. The use of a single index is "not generally to be recommended" (Everitt B.S. (2002) ''The Cambridge Dictionary of Statistics'', CUP), but informedness or Youden's index is the probability of an informed decision (as opposed to a random guess) and takes into account all predictions.

An unrelated but commonly used combination of basic statistics from information retrieval is the F-score, a (possibly weighted) harmonic mean of recall and precision, where recall = sensitivity = true positive rate, but specificity and precision are quite different measures. The F-score, like recall and precision, only considers the so-called positive predictions: recall is the probability with which actual positives are predicted as positive, precision is the probability that a positive prediction is correct, and the F-score equates these probabilities under the effective assumption that the positive labels and the positive predictions should have the same distribution and prevalence, similar to the assumption underlying Fleiss' kappa.

Youden's ''J'', informedness, recall, precision and the F-score are intrinsically unidirectional, aiming to assess the deductive effectiveness of predictions in the direction proposed by a rule, theory or classifier. Markedness (deltaP) is the corresponding statistic for the reverse or abductive direction, and matches well human learning of associations, rules and superstitions as we model possible causation, while correlation and kappa evaluate bidirectionally.
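To illustrate the contrast drawn above with made-up counts: the two confusion matrices below differ only in their number of true negatives, so the F-score cannot tell them apart while Youden's ''J'' (informedness) can:

<syntaxhighlight lang="python">
def f1_and_j(tp, fn, tn, fp):
    recall    = tp / (tp + fn)                  # = sensitivity
    precision = tp / (tp + fp)
    f1        = 2 * precision * recall / (precision + recall)
    j         = recall + tn / (tn + fp) - 1     # Youden's J / informedness
    return f1, j

# Identical positive-side behaviour (TP, FN, FP), different numbers of true negatives.
print(f1_and_j(tp=40, fn=10, tn=50,  fp=10))    # F1 = 0.80, J ~ 0.63
print(f1_and_j(tp=40, fn=10, tn=500, fp=10))    # F1 = 0.80, J ~ 0.78
</syntaxhighlight>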
The Matthews correlation coefficient is the geometric mean of the regression coefficient of the problem and that of its dual, where the component regression coefficients of the Matthews correlation coefficient are markedness (deltaP, the Youden's ''J'' of the dual problem) and informedness (Youden's ''J'' or deltaP').
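A numerical sketch of this relationship with an illustrative confusion matrix; the geometric mean of informedness and markedness reproduces the Matthews correlation coefficient when both components are positive:

<syntaxhighlight lang="python">
import math

tp, fn, fp, tn = 60, 20, 10, 110                       # illustrative 2x2 counts

informedness = tp / (tp + fn) + tn / (tn + fp) - 1     # Youden's J (deltaP')
markedness   = tp / (tp + fp) + tn / (tn + fn) - 1     # deltaP

mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(informedness, markedness)
print(mcc, math.sqrt(informedness * markedness))       # agree when both are positive
</syntaxhighlight>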
Kappa statistics such as Fleiss' kappa and Cohen's kappa are methods for calculating inter-rater reliability based on different assumptions about the marginal or prior distributions, and are increasingly used as ''chance corrected'' alternatives to accuracy in other contexts. Fleiss' kappa, like the F-score, assumes that both variables are drawn from the same distribution and thus have the same expected prevalence, while Cohen's kappa assumes that the variables are drawn from distinct distributions and is referenced to a model of expectation that assumes prevalences are independent.{{cite conference |first=David M W |last=Powers |date=2012 |title=The Problem with Kappa |conference=Conference of the European Chapter of the Association for Computational Linguistics |pages=345–355 |hdl=2328/27160}}

When the true prevalences of the two positive variables are equal, as assumed by Fleiss' kappa and the F-score, that is, when the number of positive predictions matches the number of positive classes in the dichotomous (two-class) case, the different kappa and correlation measures collapse to identity with Youden's ''J'', and recall, precision and the F-score likewise collapse to a single value.
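This collapse can be checked numerically. In the sketch below (illustrative labels; scikit-learn assumed for the kappa, correlation and F-score implementations) the number of predicted positives equals the number of actual positives, and Cohen's kappa and the Matthews correlation coefficient both equal Youden's ''J'', while recall, precision and F-score coincide with one another:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import (cohen_kappa_score, matthews_corrcoef,
                             precision_score, recall_score, f1_score)

# Confusion matrix TP=30, FN=10, FP=10, TN=50, so false positives == false negatives
# and the number of predicted positives (40) equals the number of actual positives (40).
y_true = np.array([1] * 30 + [1] * 10 + [0] * 10 + [0] * 50)
y_pred = np.array([1] * 30 + [0] * 10 + [1] * 10 + [0] * 50)

tp, fn, fp, tn = 30, 10, 10, 50
j = tp / (tp + fn) + tn / (tn + fp) - 1

print(j)                                     # ~0.583
print(cohen_kappa_score(y_true, y_pred))     # same value as J
print(matthews_corrcoef(y_true, y_pred))     # same value as J
print(recall_score(y_true, y_pred),          # recall = precision = F1 = 0.75
      precision_score(y_true, y_pred),
      f1_score(y_true, y_pred))
</syntaxhighlight>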

